Hierarchical DP Mixture

Abstract

The model is a DP mixture of normals for related random probability measures H_j. Each random measure is assumed to arise as a mixture H_j = ε F_0 + (1 − ε) F_j of one common distribution F_0 and a distribution F_j that is specific to the j-th submodel. See Mueller, Quintana and Rosner (2004) for details of the model. In summary, the implemented model is as follows. Without loss of generality, we assume that each submodel corresponds to a different study in a set of related studies. Let θ_ji denote the i-th observation in the j-th study (we use θ, assuming that the model would typically be used for a random effects distribution). We assume that θ_ji, i = 1, ..., n_j, are samples from a random probability measure for the j-th study, which in turn is a mixture of a measure F_0 that is common to all studies and an idiosyncratic measure F_j that is specific to the j-th study: θ_ji ∼ ε F_0 + (1 − ε) F_j.
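The generative mechanism above can be sketched by forward simulation, with F_0 and each F_j represented as truncated stick-breaking draws from a DP mixture of normals. All concrete settings here (ε = 0.6, three studies, the base-measure and kernel parameters) are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking(alpha, n_atoms, rng):
    """Truncated stick-breaking weights for a DP(alpha) draw."""
    betas = rng.beta(1.0, alpha, size=n_atoms)
    betas[-1] = 1.0  # close off the stick at the truncation level
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining

def sample_dp_normal_mixture(n, alpha, mu0, sigma0, sigma, n_atoms, rng):
    """Draw n points from a (truncated) DP mixture of normals.

    Atom locations come from the base measure N(mu0, sigma0^2); each
    observation adds N(0, sigma^2) kernel noise around its atom.
    """
    w = stick_breaking(alpha, n_atoms, rng)
    atoms = rng.normal(mu0, sigma0, size=n_atoms)
    z = rng.choice(n_atoms, size=n, p=w)
    return rng.normal(atoms[z], sigma)

# Hypothetical settings: eps is the common-vs-idiosyncratic mixing weight.
eps, n_studies, n_per_study = 0.6, 3, 200
F0 = lambda n: sample_dp_normal_mixture(n, 1.0, 0.0, 2.0, 0.5, 50, rng)

theta = []
for j in range(n_studies):
    # Study-specific measure F_j, offset so the studies differ visibly.
    Fj = lambda n: sample_dp_normal_mixture(n, 1.0, 3.0 + j, 2.0, 0.5, 50, rng)
    # theta_ji ~ eps * F0 + (1 - eps) * Fj
    from_common = rng.random(n_per_study) < eps
    draws = np.where(from_common, F0(n_per_study), Fj(n_per_study))
    theta.append(draws)

theta = np.vstack(theta)  # theta[j, i] = i-th random effect in study j
print(theta.shape)  # (3, 200)
```

Here the mixture is simulated by flipping an ε-coin per observation; a full posterior analysis would instead fit ε and the DP parameters, as in the paper's MCMC implementation.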


Similar resources

Nested Hierarchical Dirichlet Processes for Multi-Level Non-Parametric Admixture Modeling

The Dirichlet Process (DP) is a Bayesian non-parametric prior for infinite mixture modeling, where the number of mixture components grows with the number of data items. The Hierarchical Dirichlet Process (HDP), often used for non-parametric topic modeling, is an extension of the DP for grouped data, where each group is a mixture over shared mixture densities. The Nested Dirichlet Process (nDP), on the o...


Infinite Hidden Markov Models via the Hierarchical Dirichlet Process

Category: graphical models. In this presentation, we propose a new formalism under which we study the infinite hidden Markov model (iHMM) of Beal et al. [2]. The iHMM is a hidden Markov model (HMM) in which the number of hidden states is allowed to be countably infinite. This is achieved using the formalism of the Dirichlet process. In particular, a two-level urn model is used to determine the ...


A Convex Exemplar-based Approach to MAD-Bayes Dirichlet Process Mixture Models

MAD-Bayes (MAP-based Asymptotic Derivations) has recently been proposed as a general technique for deriving scalable algorithms for Bayesian nonparametric models. However, the combinatorial nature of objective functions derived from MAD-Bayes results in hard optimization problems, for which current practice employs heuristic algorithms analogous to k-means to find a local minimum. In this paper, we co...


Small-Variance Asymptotics for Exponential Family Dirichlet Process Mixture Models

Sampling and variational inference techniques are two standard methods for inference in probabilistic models, but for many problems, neither approach scales effectively to large-scale data. An alternative is to relax the probabilistic model into a non-probabilistic formulation which has a scalable associated algorithm. This can often be fulfilled by performing small-variance asymptotics, i.e., ...


Posterior Simulation in Countable Mixture Models for Large Datasets

Mixture models, or convex combinations of a countable number of probability distributions, offer an elegant framework for inference when the population of interest can be subdivided into latent clusters having random characteristics that are heterogeneous between, but homogeneous within, the clusters. Traditionally, the different kinds of mixture models have been motivated and analyzed from ver...



Journal:

Volume   Issue

Pages  -

Publication date: 2007